

Bayesian Neural Network Inference via Implicit Models and the Posterior Predictive Distribution

Dabrowski, Joel Janek, Pagendam, Daniel Edward

arXiv.org Artificial Intelligence

We propose a novel approach to performing approximate Bayesian inference in complex models such as Bayesian neural networks. The approach is more scalable to large data than Markov Chain Monte Carlo, it embraces more expressive models than Variational Inference, and it does not rely on adversarial training (or density ratio estimation). We adopt the recent approach of constructing two models: (1) a primary model, tasked with performing regression or classification; and (2) a secondary, expressive (e.g. implicit) model that defines an approximate posterior distribution over the parameters of the primary model. However, we optimise the parameters of the posterior model via gradient descent according to a Monte Carlo estimate of the posterior predictive distribution, which is our only approximation (other than the posterior model itself). Only a likelihood needs to be specified, and it can take various forms, such as loss functions and synthetic likelihoods, thus providing a form of likelihood-free approach. Furthermore, we formulate the approach such that the posterior samples can be either independent of, or conditionally dependent upon, the inputs to the primary model. The latter approach is shown to be capable of increasing the apparent complexity of the primary model. We see this being useful in applications such as surrogate and physics-based models. To promote how the Bayesian paradigm offers more than just uncertainty quantification, we demonstrate uncertainty quantification, multi-modality, and an application with a recent deep forecasting neural network architecture.
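To make the core idea concrete, here is a minimal numpy sketch, not the authors' implementation: a toy one-parameter regression as the primary model, an affine push-forward of Gaussian noise standing in for the expressive posterior model, and finite-difference gradient ascent on a Monte Carlo estimate of the log posterior predictive. All names, sizes, and learning rates are illustrative assumptions.

```python
import numpy as np

# Toy data for the primary model (linear regression y = w*x + noise).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=40)
y = 2.0 * x + 0.1 * rng.normal(size=40)
NOISE_SD = 0.5  # assumed observation noise in the likelihood

def mc_log_predictive(theta, n_samples=128, seed=0):
    """Monte Carlo estimate of the mean log posterior predictive density.

    theta parameterises a simple 'posterior model' that pushes Gaussian
    noise through an affine map to produce samples of the primary
    model's weight w (a stand-in for a more expressive implicit model).
    """
    r = np.random.default_rng(seed)                       # common random numbers
    mu, log_sig = theta
    w = mu + np.exp(log_sig) * r.normal(size=n_samples)   # (S,) weight samples
    resid = y[None, :] - w[:, None] * x[None, :]          # (S, N) residuals
    loglik = -0.5 * (resid / NOISE_SD) ** 2               # log-lik up to a constant
    m = loglik.max(axis=0)
    # log p(y_n | x_n) ~= log (1/S) * sum_s exp(loglik[s, n]), stably
    lpp = m + np.log(np.exp(loglik - m).mean(axis=0))
    return lpp.mean()

# Optimise the posterior model's parameters by gradient ascent on the
# MC predictive estimate (finite differences keep the sketch dependency-free).
theta = np.array([0.0, -1.0])
for _ in range(400):
    grad = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = 1e-4
        grad[i] = (mc_log_predictive(theta + e) - mc_log_predictive(theta - e)) / 2e-4
    theta += 0.05 * grad

print(theta[0])  # learned posterior mean of w, near the true slope
```

In practice the affine map would be replaced by a neural sampler (the implicit model) and the finite differences by reparameterised automatic differentiation; the objective, however, is the same MC estimate of the posterior predictive.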


BNNpriors: A library for Bayesian neural network inference with different prior distributions

Fortuin, Vincent, Garriga-Alonso, Adrià, van der Wilk, Mark, Aitchison, Laurence

arXiv.org Machine Learning

Bayesian neural networks have shown great promise in many applications where calibrated uncertainty estimates are crucial and can often also lead to a higher predictive performance. However, it remains challenging to choose a good prior distribution over their weights. While isotropic Gaussian priors are often chosen in practice due to their simplicity, they do not reflect our true prior beliefs well and can lead to suboptimal performance. Our new library, BNNpriors, enables state-of-the-art Markov Chain Monte Carlo inference on Bayesian neural networks with a wide range of predefined priors, including heavy-tailed ones, hierarchical ones, and mixture priors. Moreover, it follows a modular approach that eases the design and implementation of new custom priors. It has facilitated foundational discoveries on the nature of the cold posterior effect in Bayesian neural networks and will hopefully catalyze future research as well as practical applications in this area.
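As a generic illustration of the kind of non-isotropic priors discussed above (this is plain numpy, not the BNNpriors API), the log density of a two-component scale mixture of Gaussians shows how heavier tails than a single Gaussian arise from a wide mixture component:

```python
import numpy as np

def gaussian_logpdf(w, mu, sd):
    # Elementwise log N(w; mu, sd^2).
    return -0.5 * ((w - mu) / sd) ** 2 - np.log(sd * np.sqrt(2.0 * np.pi))

def mixture_prior_logpdf(w, pis, mus, sds):
    """Log density of a mixture-of-Gaussians prior, elementwise over w.

    Computed via log-sum-exp for numerical stability. A narrow plus a
    wide zero-mean component gives a heavier-tailed prior than any
    single isotropic Gaussian.
    """
    comps = np.stack([np.log(p) + gaussian_logpdf(w, m, s)
                      for p, m, s in zip(pis, mus, sds)])
    m = comps.max(axis=0)
    return m + np.log(np.exp(comps - m).sum(axis=0))

w = np.array([0.0, 3.0])
lp = mixture_prior_logpdf(w, pis=[0.7, 0.3], mus=[0.0, 0.0], sds=[0.1, 2.0])
```

The wide component (sd = 2.0 here) dominates the density at w = 3, so large weights are penalised far less than under the narrow Gaussian alone; hierarchical priors go a step further by placing a distribution over such scale parameters.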


Quality of Uncertainty Quantification for Bayesian Neural Network Inference

Yao, Jiayu, Pan, Weiwei, Ghosh, Soumya, Doshi-Velez, Finale

arXiv.org Machine Learning

Bayesian Neural Networks (BNNs) place priors over the parameters in a neural network. Inference in BNNs, however, is difficult; all inference methods for BNNs are approximate. In this work, we empirically compare the quality of predictive uncertainty estimates for 10 common inference methods.
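One common way to probe the quality of predictive uncertainty (not necessarily the metric used in the paper) is the empirical coverage of central credible intervals built from posterior predictive samples. A small self-contained sketch, with synthetic, well-calibrated predictive draws standing in for the output of some approximate inference method:

```python
import numpy as np

# Hypothetical posterior predictive draws: rows are samples,
# columns are test points. Calibrated by construction here.
rng = np.random.default_rng(1)
y_true = rng.normal(0.0, 1.0, size=200)
pred_samples = rng.normal(0.0, 1.0, size=(500, 200))

def interval_coverage(samples, y, level=0.9):
    """Fraction of test points whose true value falls inside the
    central `level` credible interval of the predictive samples."""
    lo = np.quantile(samples, (1 - level) / 2, axis=0)
    hi = np.quantile(samples, 1 - (1 - level) / 2, axis=0)
    return np.mean((y >= lo) & (y <= hi))

cov = interval_coverage(pred_samples, y_true)  # close to the nominal 0.9
```

Under- or over-confident approximate posteriors show up as coverage well below or above the nominal level, which is why such checks are a natural complement to accuracy when comparing inference methods.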